Extending Quality Assessment beyond the Classroom: The Campus Computer Lab Scale
Abstract
Computers have assumed an increasingly important role in the educational process, and consequently, institutions of higher learning have sought to enhance the quality of computer access they provide on their campuses. Based on a study of student computer users at a large state university, this paper reports the purification of a psychometric scale designed to assess the service quality of campus computer labs. The scale consists of eight indicators, and it is intended to be used to monitor lab quality over time and to assist in the planning of specific actions for quality improvement.

INTRODUCTION

Public higher education is facing mounting pressures to deliver improved value in all its activities (Heck & Johnsrud 2000; Wellman 2001). Parents, students, and legislatures are demanding that additional attention be paid to the performance of the faculty, the curriculum, and any university-provided services that contribute to the college experience (Brennan & Shah 2000; Evanbeck & Kahn 2001; Underwood 2000). And where there is increased scrutiny, there is a need for objective assessment, benchmarking, and planning for ongoing improvement (Watson & Pitt 1998). All of these activities require the development of appropriate metrics for assessing services, and recent literature has provided measurement instruments for such on-campus services as library resources (White & Abels 1995), career services (Engelland et al. 2000), dining services (Stevens 1995), and academic advising (Abernathy & Engelland 2001). The methodology and instruments proposed here can be utilized as part of an ongoing program of improvement in the university experience.

One area where little assessment work has been reported is the campus computer labs provided for student use. These labs serve a large number of students each day, who drop by to type papers, perform statistical analyses, access library and Internet sources, or check e-mail. However, anecdotal evidence indicates that many students are not pleased with the service quality of the computer labs provided on their campuses. Appropriate assessment instruments are needed so that institutions can evaluate the quality of the services they provide and make plans to overcome any deficiencies (Watson & Pitt 1998). Accordingly, this study reports the purification of an instrument intended for measuring student satisfaction with lab service quality.

SERVICE QUALITY ASSESSMENT

Multi-item scales are generally superior to single-item measures for attitudinal measurement. The three principal deficiencies of single-item scales that can be overcome through the use of multi-item scales are inconsistency over time, imprecision, and narrow domain representation (Spector 1992). Accordingly, it is not surprising that the literature has regarded service quality as a construct representing a broad domain that requires measurement with a multi-item scale.

Very little information is available on the subject of student evaluation of computer lab service. Our literature search failed to locate any refereed journal articles relating to computer lab service quality or to the development of a measurement instrument for this purpose. There is, however, a large stream of literature dealing with the assessment of service quality, beginning with SERVQUAL (Parasuraman et al. 1988). The SERVQUAL scale contains five factors, but empirical studies have shown that these dimensions may not be generic across all situations (Carman 1990) or even for the same type of service when different cultures are represented in the sample (Kettinger et al. 1995). SERVQUAL computes quality as the difference between reported perceptions and reported expectations, but this computational approach has not been universally adopted (Cronin & Taylor 1994). For purposes of this study, we do not wish to join the debate regarding the superiority of perception-only or gap-scored measures (Van Dyke et al. 1999; Kettinger & Lee 1999). We note, however, that despite the fact that SERVQUAL gap measures continue to be used (Jiang et al. 2000), expectations are hard to measure separately from perceptions (Carman 1990), and retrospective accounts of expectations may not be reliable (Golden 1992). Accordingly, the measurement approach adopted here is based upon measuring performance perceptions only.
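To make the two scoring alternatives concrete, the sketch below contrasts SERVQUAL-style gap scoring with the perception-only scoring adopted in this study. It is a minimal illustration: the response values are hypothetical and are not data from the study.

```python
import numpy as np

# Hypothetical 1-5 Likert responses for three respondents on four items;
# these values are illustrative only, not data from the study.
perceptions = np.array([
    [4, 3, 5, 2],
    [3, 3, 4, 3],
    [5, 4, 4, 4],
], dtype=float)
expectations = np.array([
    [5, 4, 5, 4],
    [4, 4, 5, 3],
    [5, 5, 4, 5],
], dtype=float)

# SERVQUAL-style gap scoring: perception minus expectation on each item,
# averaged across items for each respondent.
gap_scores = (perceptions - expectations).mean(axis=1)

# Perception-only scoring (the approach adopted here): average the
# performance perceptions alone.
perception_scores = perceptions.mean(axis=1)

print("Gap scores:       ", gap_scores)         # [-1.   -0.75 -0.5 ]
print("Perception scores:", perception_scores)  # [3.5  3.25 4.25]
```

The gap approach requires eliciting expectations separately, which is exactly the measurement burden the perception-only approach avoids.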
METHODOLOGY

Development of the Item Pool

As suggested by DeVellis (1991), a large item pool was generated. Candidate items for the pool incorporated suggestions from students, the initial SERVQUAL scale (Parasuraman et al. 1988), a revised SERVQUAL scale (Engelland et al. 2000), and items selected from Swanson and Phillips' (1998) computer lab customer satisfaction survey. In developing the items, five guidelines based upon Spector (1992) were followed: (1) each item expresses one and only one idea; (2) both positively and negatively worded items are developed; (3) colloquialisms, expressions, and jargon are avoided; (4) the reading level of the respondents is considered; and (5) the use of negatives to reverse the wording of an item is avoided.

A total of 50 items were developed to tap into various facets of computer lab service quality, including physical rooms, hardware, software, hours of operation, availability of computers, lab assistants, printing, computing safety, and privacy (see Table 1). Faculty members who had made lab reservations for class use within the past two months were recruited to serve as expert judges for a face validity test (DeVellis 1991; Bearden & Netemeyer 1999; Hardesty & Bearden 2001). Consistent with Hardesty and Bearden (2001), we employed the preferred "sumscore" method of using expert judges' opinions, selecting items based on the combined score of all judges per item. This reduced the item pool to 42 items.

Sample Characteristics

The setting for the study was a college of business associated with a large US public university. The college provides two large computer labs for student use, and these were selected as the focus of the study. Data were collected via a web-based survey made available to all students with a business major. Demographic questions and a single-item general satisfaction scale (1 to 10) were included with the survey instrument. Students were contacted by e-mail twice and provided with a link to the online survey instrument. Students were promised anonymity, and no attempt was made to identify any of the respondents through cookies or other tracking devices. E-mail requests were sent to 2,446 students, and 278 participated, representing 11.0 percent of the population. Returns were inspected individually for completeness, and 21 cases were eliminated because of excessive missing values, leaving 258 responses. Statistics for the mean, standard deviation, skewness, and standard error were reviewed before and after the purging of these 21 cases. The differences in these statistics were minor, so the purging did not lead to any significant changes in the results. Respondent characteristics are summarized in Table 2.

Data Analysis

Box-and-whisker plots were obtained for all items, resulting in the decision to eliminate three items with high skewness and unbalanced distributions, as recommended by Clark and Watson (1995). In addition, three reverse-coded items were discarded because of problems with polarity (Herche & Engelland 1996). Consistent with Gerbing and Anderson (1988), an exploratory factor analysis was performed on the remaining items to gain insight into the factor structure. The scree plot showed a definite elbow after the first factor extracted, and the "mineigen one" rule concurred, indicating a one-factor solution (Hair et al. 1992). In addition, the factor analysis revealed significant loadings on the first factor for a majority of the items. Accordingly, the decision was made to pursue a unidimensional scale.
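The factor-extraction logic just described can be sketched in a few lines: the eigenvalues of the inter-item correlation matrix drive both the scree plot and the "mineigen one" (Kaiser) rule. The data below are randomly generated stand-ins with a single common factor, since the actual survey responses are not reproduced in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 258 respondents x 8 items driven by one common factor,
# mimicking the unidimensional structure reported above (not the real data).
n_respondents, n_items = 258, 8
common_factor = rng.normal(size=(n_respondents, 1))
items = 0.6 * common_factor + 0.8 * rng.normal(size=(n_respondents, n_items))

# Eigenvalues of the inter-item correlation matrix, largest first.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# The scree plot is simply a line plot of these eigenvalues; the
# "mineigen one" rule retains factors whose eigenvalue exceeds 1.
print("Eigenvalues:", np.round(eigenvalues, 2))
print("Factors retained by mineigen-one:", int((eigenvalues > 1).sum()))
```

A dominant first eigenvalue followed by a sharp drop reproduces the elbow pattern described above and points to a one-factor solution.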
In order to reduce redundancy, the remaining items were then purged using a combination of inspection of the item list, to preserve the breadth of the domain, and inspection of corrected item-to-total correlations. The result was the elimination of most items with inter-item correlations higher than .70. The final scale was composed of eight parsimonious items (Table 1), with inter-item correlations ranging from .15 to .41. The mean inter-item correlation of the final scale was .31, which concurs with the guidelines of Clark and Watson (1995). Internal consistency reliability as measured by coefficient alpha (α) was .744; an α level of .70 is considered respectable (DeVellis 1991) and is recommended for preliminary research (Nunnally 1978).
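For readers who wish to apply the same screening to their own lab data, the sketch below computes coefficient alpha and the corrected item-to-total correlations used in the purging step. The function names are our own, and the stand-in responses are random data, not the study's responses.

```python
import numpy as np

# Stand-in responses (258 respondents x 8 items); substitute real data here.
rng = np.random.default_rng(1)
factor = rng.normal(size=(258, 1))
items = 0.6 * factor + 0.8 * rng.normal(size=(258, 8))

def cronbach_alpha(x: np.ndarray) -> float:
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = x.shape[1]
    return (k / (k - 1)) * (1 - x.var(axis=0, ddof=1).sum()
                            / x.sum(axis=1).var(ddof=1))

def corrected_item_total(x: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = x.sum(axis=1)
    return np.array([np.corrcoef(x[:, j], total - x[:, j])[0, 1]
                     for j in range(x.shape[1])])

print("alpha:", round(cronbach_alpha(items), 3))
print("corrected item-total r:", np.round(corrected_item_total(items), 2))

# Inter-item correlations above about .70 flag redundant item pairs,
# the criterion used in the purging described above.
high = np.abs(np.triu(np.corrcoef(items, rowvar=False), k=1)) > .70
print("redundant pairs:", int(high.sum()))
```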
DISCUSSION

Since a sufficiently high coefficient α is a necessary but not sufficient condition for unidimensionality, a confirmatory factor analysis was performed (Kumar & Dillon 1987). Indices of fit were examined, including (1) an RMSEA of .031, which falls within the .05 guideline (Jöreskog 1993); (2) a Goodness-of-Fit Index of .976, which exceeds the .90 guideline (Jöreskog & Sörbom 1984); (3) an Adjusted Goodness-of-Fit Index of .957, which exceeds the .90 guideline (Jöreskog & Sörbom 1984); (4) a Normed Fit Index of .925, which exceeds the .90 guideline (Bentler 1992); and (5) a Bentler Comparative Fit Index of .985, which exceeds the .90 guideline (Bentler 1992). Together, the results provide strong evidence of unidimensionality.

On the whole, the proposed eight-item scale appears to be a good representation of students' understanding of computer lab service quality at one college of business. Of course, different circumstances may exist in different labs at different universities, such as hours of operation, available equipment, and printing policies, and these differences could necessitate some modification of the scale items. A follow-up survey of all students at the focal university is planned in order to explore the commonalities and differences among all computer labs on campus.

The validity of the proposed measure should be explored further. One approach is to begin the survey instrument with a single-item service satisfaction measure, followed by the open-ended item "Please list the issues you considered when deciding on your overall service satisfaction level"; the answers can then be reviewed for their relation to the items on the scale. Since no established scales for this construct were found in the literature, convergent validity could not be established by comparing the new scale with an established measure. However, convergent validity can also be shown by two scales loading on the same factor (DeVellis 1991), and the overall satisfaction item, which can be considered a single-item scale, loaded on the same factor as all items in the new scale. No attempt was made to establish discriminant validity in this exploratory research, and because of the limited theoretical foundation, no theory-based predictions could be formed to test nomological validity.

The web-based method of data collection employed here can generate a substantial number of responses within a short period of time and is encouraged for future computer lab research. Furthermore, students who use computer labs are certain to be familiar with web browsers and should have no difficulty completing the questionnaire in this form. Computer lab administrative staff could consider using a pop-up message requesting participation in the survey, appearing at regular intervals or connected to the log-on process. Use of the eight-item scale is recommended to increase participation, but additional scale items could be considered, especially if problem areas are suspected.

Development of norms is the final step in Churchill's paradigm of measure development (Churchill 1979). Accordingly, the results of the instrument are reported here for future comparison with other populations of interest. When placed on a 5-point scale, the sum of the scores on the eight items divided by 8 returned a mean value of 2.99, a standard deviation of .697, a range from a minimum of 1 to a maximum of 4.75, and a median value of 3. Sixty-eight percent of the scores fall between 2.3 and 3.7, 95 percent between 1.6 and 4.4, and 99 percent between 1.00 and 4.75. It is hoped that this instrument can serve as a cost-effective gauge of student satisfaction with service quality. Results of the survey may be used to trigger action when scores fall below the norm or below a target score selected by the institution, and low scores on individual scale items can identify areas to be targeted, avoiding the allocation of resources to areas where students are already satisfied while their real concerns go unaddressed.
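As a worked check on the published norms, the arithmetic below reproduces the reported bands: scale scores are the sum of the eight items divided by 8, and the 68 and 95 percent bands correspond to roughly one and two standard deviations around the reported mean of 2.99. The single-respondent example uses hypothetical responses.

```python
# Reproducing the reported norm bands from the published summary statistics.
mean, sd = 2.99, 0.697

band_68 = (mean - sd, mean + sd)          # about (2.3, 3.7)
band_95 = (mean - 2 * sd, mean + 2 * sd)  # about (1.6, 4.4)

print(f"68% of scores: {band_68[0]:.1f} to {band_68[1]:.1f}")
print(f"95% of scores: {band_95[0]:.1f} to {band_95[1]:.1f}")

# Scoring one respondent: eight 1-5 items, summed and divided by 8.
responses = [3, 4, 2, 3, 3, 4, 3, 2]  # hypothetical responses
score = sum(responses) / 8
print("scale score:", score)  # compare against the norm or a target score
```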
Similar Articles
Quality Assessment of Staff In-Service Training from the Viewpoints of Employees of Ahvaz Jundishapur University of Medical Sciences
Background: Medical sciences universities have a great mission to train efficient, professional, and committed human resources to meet the health needs of the people. This research aimed to assess the quality of staff in-service training from the viewpoints of employees of Ahvaz Jundishapur University of Medical Sciences, based on the SERVQUAL model, between 2011 and 2015. Methods: This is a descriptive, cro...
Full text

Applying the Grid Computing Paradigm within a Liberal Arts Academic Environment
Until very recently analyses and computations on a large scale were feasible only on supercomputers or clusters of high-end processors. Such computational infrastructure requires massive investments that can be unrealistic in a liberal arts college environment such as that found at The College of New Jersey (TCNJ). However, TCNJ has several state-of-the-art campus computer labs for use by stude...
Full text

Beyond data in the smart city: learning from a case study of re-purposing existing campus IoT
In this article we present a case study of our experiences of using existing IoT infrastructure to create a campus scale “living laboratory” for promoting energy savings and environmental sustainability. As a series of lessons for others creating IoT systems from existing city infrastructures we offer the challenges we have experienced through our attempt to join up and re-purpose existing ener...
Full text

Do Off-campus Courses Possess a Level of Quality Comparable to That of On-campus Courses?
The purpose of this study was to describe and compare perceptions of the quality of on-campus and off-campus courses held by students enrolled in courses offered through the College of Agriculture Off-Campus Professional Agriculture Degree Programs (N = 173) and faculty members with teaching responsibilities or with teaching experience in the same college of agriculture (N = 262). Faculty and...
Full text

An ontology-driven reading agent
Textual data—from manuscripts to publications to website content—contains much of extant human knowledge. Unfortunately, the ability to harvest and effectively use this information beyond simple search/retrieval is greatly hampered by the scale of the “reading” problem: there is too much for any one person to read, and computers are not entirely adept at comprehending all information—explicit a...
Full text